BWIBots: A platform for bridging the gap between AI and human–robot interaction research
Recent progress in both AI and robotics has enabled the development of general-purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially human–robot interaction for service robots. Called BWIBots, the robots were designed as a part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then proceeds to present an overview of various research contributions that have enabled the BWIBots to better (a) execute action sequences to complete user requests, (b) efficiently ask questions to resolve user requests, (c) understand human commands given in natural language, and (d) understand human intention from afar. The article concludes with a look forward towards future research opportunities and applications enabled by the BWIBot platform.
On-demand coordination of multiple service robots
Research in recent years has made it increasingly plausible to deploy a large number of service robots in home and office environments. Given that multiple mobile robots may be available in the environment performing routine duties such as cleaning, building maintenance, or patrolling, and that each robot may have a set of basic interfaces and manipulation tools to interact with one another as well as humans in the environment, is it possible to coordinate multiple robots for a previously unplanned on-demand task? The research presented in this dissertation aims to begin answering this question.
This dissertation makes three main contributions. The first contribution of this work is a formal framework for coordinating multiple robots to perform an on-demand task while balancing two objectives: (i) complete this on-demand task as quickly as possible, and (ii) minimize the total amount of time each robot is diverted from its routine duties. We formalize this stochastic sequential decision-making problem, termed on-demand multi-robot coordination, as a Markov decision process (MDP). Furthermore, we study this problem in the context of a specific on-demand task called multi-robot human guidance, where multiple robots need to coordinate and efficiently guide a visitor to their destination.
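The trade-off between the two objectives above can be caricatured as a per-step reward that penalizes both elapsed time on the unfinished on-demand task and the number of robots diverted from routine duties. The function below is a minimal sketch of that idea; the weights and names are illustrative assumptions, not the dissertation's exact formulation.

```python
def reward(task_done, n_diverted, c_task=1.0, c_divert=0.2):
    """Per-step reward balancing the two stated objectives:
    - a time penalty of c_task for every step the on-demand task is unfinished,
    - a penalty of c_divert per robot currently diverted from its routine duty.
    The weights c_task and c_divert are illustrative, not from the source."""
    r = 0.0
    if not task_done:
        r -= c_task          # encourage finishing the on-demand task quickly
    r -= c_divert * n_diverted  # discourage pulling robots off routine duties
    return r

# Diverting a second robot only pays off if it shortens the task enough
# to offset the extra per-step diversion penalty.
print(reward(False, 2))
```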
Second, we develop and analyze stochastic planning algorithms to efficiently solve the on-demand multi-robot coordination problem in real time. Monte Carlo Tree Search (MCTS) planning algorithms have demonstrated excellent results solving MDPs with large state spaces and high action branching factors. We propose variants of the MCTS algorithm that use biased backpropagation techniques for value estimation, which help MCTS converge quickly to reasonable, if suboptimal, policies compared to standard unbiased Monte Carlo backpropagation. In addition to using these planning algorithms to solve the on-demand multi-robot coordination problem in real time, we also analyze their performance on benchmark domains from the International Planning Competition (IPC).
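One way to bias the backpropagation step, sketched below on a toy chain MDP, is to blend each sampled return with the best child's current value estimate via a weight λ (λ = 1 recovers plain unbiased Monte Carlo backpropagation). This is a minimal illustration of the general idea, assuming a made-up domain and parameter values; it is not the dissertation's exact algorithm.

```python
import math
import random

class ChainMDP:
    """Toy chain: states 0..4. Action 1 moves right (reward 1 on reaching
    the last state); action 0 stops the episode with no reward."""
    n = 5
    actions = (0, 1)

    def step(self, s, a):
        if a == 0:
            return s, 0.0, True          # give up: episode ends
        s2 = s + 1
        done = s2 == self.n - 1
        return s2, (1.0 if done else 0.0), done

class Node:
    def __init__(self):
        self.n = 0          # visit count
        self.q = 0.0        # running value estimate
        self.children = {}  # action -> Node

def uct_select(node, c=1.4):
    # Standard UCB1 action selection over the node's children.
    return max(node.children, key=lambda a: node.children[a].q +
               c * math.sqrt(math.log(node.n + 1) / (node.children[a].n + 1)))

def simulate(mdp, node, s, depth, lam, rng):
    """One MCTS iteration from `node`. The backup blends the sampled
    return with the best child's current estimate, weighted by lam;
    lam = 1 is plain (unbiased) Monte Carlo backpropagation."""
    if depth == 0:
        return 0.0
    if not node.children:                      # leaf: expand, then rollout
        node.children = {a: Node() for a in mdp.actions}
        ret = 0.0
        for _ in range(depth):
            s, r, done = mdp.step(s, rng.choice(mdp.actions))
            ret += r
            if done:
                break
        node.n += 1
        node.q += (ret - node.q) / node.n
        return ret
    a = uct_select(node)
    s2, r, done = mdp.step(s, a)
    future = 0.0 if done else simulate(mdp, node.children[a], s2, depth - 1, lam, rng)
    child = node.children[a]
    child.n += 1
    child.q += (r + future - child.q) / child.n
    best = max(ch.q for ch in node.children.values() if ch.n > 0)
    backup = lam * (r + future) + (1 - lam) * best   # biased backup
    node.n += 1
    node.q += (backup - node.q) / node.n
    return backup

def plan(mdp, s0, iters=2000, depth=10, lam=0.5, seed=0):
    rng = random.Random(seed)
    root = Node()
    for _ in range(iters):
        simulate(mdp, root, s0, depth, lam, rng)
    return max(root.children, key=lambda a: root.children[a].n)

best_action = plan(ChainMDP(), 0)
print(best_action)
```

Mixing in the best child's estimate propagates high values up the tree faster than averaging raw rollouts, at the cost of an optimistic bias, which matches the abstract's "reasonable yet suboptimal policies, quickly" trade-off.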
The third and final contribution of this work is the development of a multi-robot system built on top of the Segway RMP platform at the Learning Agents Research Group, UT Austin, and the implementation and evaluation of the on-demand multi-robot coordination problem and two different planning algorithms on this platform. We also perform two studies using simulated environments, where real humans control a simulated avatar, to test the implementation of the MDP formalization and planning algorithms presented in this dissertation.
Dynamically Constructed (PO)MDPs for Adaptive Robot Planning
To operate in human-robot coexisting environments, intelligent robots need to simultaneously reason with commonsense knowledge and plan under uncertainty. Markov decision processes (MDPs) and partially observable MDPs (POMDPs) are good at planning under uncertainty toward maximizing long-term rewards; P-LOG, a declarative programming language under Answer Set semantics, is strong in commonsense reasoning. In this paper, we present a novel algorithm called iCORPP that uses P-LOG to dynamically reason about, and construct, (PO)MDPs. iCORPP successfully shields exogenous domain attributes from the (PO)MDPs, which limits computational complexity and enables the (PO)MDPs to adapt to changes in the values of these attributes. We conduct a number of experimental trials using two example problems in simulation and demonstrate iCORPP on a real robot. Results show significant improvements compared to competitive baselines.
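The shielding idea can be sketched as follows: a reasoner fixes the values of exogenous attributes (those that do not change during plan execution), and the planning state space is enumerated over the endogenous attributes only, so its size no longer grows with the exogenous ones. All names and the attribute encoding below are assumptions for illustration, not iCORPP's actual representation.

```python
from itertools import product

def construct_mdp_states(attributes, inferred):
    """Split attributes into endogenous (the planner reasons about them)
    and exogenous (fixed by the commonsense reasoner for this episode),
    then enumerate MDP states over the endogenous attributes only."""
    endo = {k: v["domain"] for k, v in attributes.items() if v["endogenous"]}
    exo = {k: inferred[k] for k, v in attributes.items() if not v["endogenous"]}
    states = list(product(*endo.values()))
    return states, exo

# Hypothetical domain: the robot's location and a door state are endogenous;
# whether the floor is wet is exogenous and fixed by the reasoner.
attrs = {
    "robot_loc": {"endogenous": True, "domain": ["lab", "hall", "office"]},
    "door_open": {"endogenous": True, "domain": [False, True]},
    "floor_wet": {"endogenous": False, "domain": [False, True]},
}
states, fixed = construct_mdp_states(attrs, {"floor_wet": True})
print(len(states))  # 3 locations x 2 door states = 6
```

If the reasoner later infers a different value for an exogenous attribute, a fresh (small) model is constructed rather than folding that attribute into a larger state space.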
Planning in Action Language BC while Learning Action Costs for Mobile Robots
The action language BC provides an elegant way of formalizing dynamic domains that involve indirect effects of actions and recursively defined fluents. In complex robot task planning domains, robots may need to plan with incomplete information and reason about indirect or recursive action effects. In this paper, we demonstrate how BC can be used for robot task planning to address these challenges. Additionally, action costs are incorporated into planning to produce optimal plans, and we estimate these costs from experience, making planning adaptive. This paper presents the first application of BC on a real robot in a realistic domain, which involves human-robot interaction for knowledge acquisition, optimal plan generation to minimize navigation time, and learning for adaptive planning.
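For the cost-learning component, one simple scheme consistent with "estimate these costs from experience" is a per-action running average of observed execution durations, which the planner then uses as the action's cost. The class and prior below are illustrative assumptions, not the paper's exact estimator.

```python
class CostLearner:
    """Running-average cost estimator: each observed execution duration
    updates the cost the planner assigns to that action. Unseen actions
    fall back to an optimistic prior so they still get tried."""

    def __init__(self, prior=1.0):
        self.prior = prior
        self.obs = {}  # action -> (count, mean observed duration)

    def update(self, action, duration):
        n, mean = self.obs.get(action, (0, self.prior))
        n += 1
        mean += (duration - mean) / n   # incremental mean update
        self.obs[action] = (n, mean)

    def cost(self, action):
        return self.obs.get(action, (0, self.prior))[1]

learner = CostLearner()
learner.update("goto_hall", 4.0)   # hypothetical navigation action
learner.update("goto_hall", 6.0)
print(learner.cost("goto_hall"))
```

As more traversals are logged, the estimated costs track actual navigation times, so cost-optimal plans adapt to the building as experienced rather than as modeled up front.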